Roundtable for institutions and organisations working with youth
Facilitator:
Martin Nekola (Faculty of Social Sciences, Czech Evaluation Society)
Panellists:
Jana Drlíková (Head of the National Coordination Authority of the Ministry of Regional Development (MMR NOK) and President of the Czech Evaluation Society)
Martin Dytrych (Head of the Evaluation Unit at the Ministry of Labour and Social Affairs)
Barbora Hořavová (Programme Director of OSF Foundation)
Bob Kartous (Director of the Prague Innovation Institute)
Michaela Mrózková (Head of Evaluation Department of the Ministry of Education, Youth and Sports)
Zdeněk Slejška (Director of the Eduzměna Foundation and Ashoka Fellow 2013)
Discussion participants:
Professional evaluators and representatives of leading Czech organisations and institutions working with youth
This article is a transcript of a roundtable discussion held on 23 February 2022, attended by professional evaluators and representatives of leading Czech organisations and institutions working with youth.
The roundtable was held online under the auspices of the Czech Evaluation Society (CES).
The aim of the roundtable was to look at evaluation through the eyes of donors and beneficiaries – distinguishing, on the donor side, between the perspectives of public institutions and foundations – and to work with the following questions:
- What should be the aim of the evaluation from your point of view? What do you want (or need) to learn from it?
- What do you use the evaluation results for?
- To what extent and under what circumstances do donors/governing bodies trust the evaluation results submitted by the supported organisation?
- What problems are you facing?
- Can you share any examples of good or bad practice? Do you have specific recommendations or practices that could be useful for the supported organisation?
- Can the results of an evaluation actually cause damage to a beneficiary?
At the beginning, the panellists gave brief inputs, summarised below:
Jana Drlíková
- The NOK both outsources evaluations and carries out its own, but is not a typical donor; however, Jana Drlíková can also share her experience gained from her work in the CES.
Martin Dytrych
- Evaluations are the main activity of the department; it mainly deals with the evaluation of projects and calls of the Operational Programme Employment, i.e., mainly the EU funding programmes.
- The department used to commission evaluations externally; now it manages to carry out more of them internally.
Barbora Hořavová
- Evaluation is always evolving and new tools and interconnections between them keep emerging, but one key question remains – being aware of what I am doing and why. The conclusion of an evaluation should state who will benefit from an intervention, what will be improved and how this was achieved.
- OSF conducts internal and external evaluation, and guides its grantees to do so as well.
- Quality evaluation is expensive.
- Organisations should carry out evaluation primarily for themselves, regardless of donors – the learning process it brings is essential.
- Recently, there has been a boom in evaluation.
- In the Czech Republic, quality evaluation has been somewhat held back by the European Structural and Investment Funds, where evaluation focuses on indicators; people tend to forget that the output of evaluation should be learning and that mistakes, or even failure, matter.
Michaela Mrózková
- At the Ministry of Education, they struggle quite a bit with the proliferation of evaluation activities and with indicators that are simply numbers, without any further characterisation or linkage to the theory of change and the programmes as such; the ministry therefore supplements them with its own evaluations, which provide more information – though it is still learning how to do this well.
- They are also gradually learning to collect data that will enable impact evaluation.
- The ministry has been gradually abandoning massive external evaluations and has been trying to handle everything internally.
Bob Kartous
- Has experience from both perspectives, as a representative of both a donor and a non-profit project developer.
- What is the purpose of evaluation – to fulfil indicators, or to verify that organisations, institutions and corporations meet their social responsibility? There is a danger that indicators can be set to align with, for example, a corporation's marketing objectives.
Zdeněk Slejška
- He, too, can speak from the position of both a donor and a recipient; beyond that, he works closely with his foundation's grantees (who use evaluation results for their own learning).
PANEL DISCUSSION
- BH: Indicators cannot be eliminated completely, but the question is how the donor sets them; they represent a certain necessity that we have to fulfil. However, we can also set our own indicators, just for ourselves – and OSF is trying to motivate its grantees to do so.
- JD: Evaluation is about searching for a certain truth, some kind of reality – it makes sense if we really want to find something out (even what we would rather not know); if we are not ready for evaluation, it can bring unpleasant fears.
- JD: Projects funded from European sources are evaluated much more than Czech-funded ones – there is far less data on projects funded by the Czech state; but it is true that indicators play a major role here.
- JD: The indicators set by the European Commission are very general; the individual managing authorities set further sub-indicators themselves. However, with large volumes (thousands of projects) it is necessary to aggregate the acquired data. At the same time, care must be taken not to monitor too much – that would place an undue administrative burden on the beneficiaries.
- ZS: The debate around indicators is reminiscent of the debate around learning outcomes – it tends to be about whether pupils have acquired certain factual knowledge, not about the extent to which they are equipped with other skills. If there is an effort to debate types of evaluation other than indicator-based evaluation, it will lead to better quality; this debate is already underway, which is good. It is also beneficial that the EU makes us evaluate more than in the past.
- MN: We need to ask to whom we want to report the results of our work – for example, parents may be interested in a verbal evaluation of school results, while elsewhere we need to report grades, average results for the whole school, etc.
- MM: The European Commission requires aggregable data because it works with large numbers obtained from different Member States, i.e., numbers of beneficiaries, participants, etc.; at a lower level, the data collected is tailored to the projects but still needs to be aggregable. The Ministry of Education deals with this by repeatedly collecting the same type of data to identify shifts in certain areas; this can be combined with other types of data, e.g., international surveys. Continuity is important here, because in education it takes a long time before real impacts can be perceived.
- BK: What we expect from evaluation matters: we can elaborate it in such depth that it evaluates all the formal outputs in great detail to prove the project has been implemented correctly, yet it still says nothing about impact. However, if implementers merely try to meet these formalistic requirements, and there is not enough trust that they are implementing the project to bring real positive change, then no new system will ever emerge that could not be "learned", i.e., gamed. In other words, it is not so much about the evaluation itself as about the level of trust and accountability in a given society – in societies where both are higher, evaluation systems look different.
- MN: Rather than producing sophisticated evaluation models, it would be better to cultivate the relationship between donor and recipient – but how? BK: perhaps this is easier with private donors; with systemic projects managed by governing bodies it is hard to focus evaluation on real impact, and only over time do organisations profile themselves to donors as trusted or untrusted.
- Suggestion from the audience (NK, one of the Youth Impact grantees): evaluation should be mandatory only for innovative projects, and its design should be attached to the grant application, based on quality standards (e.g., those of the Czech Evaluation Society).
- JD: If an organisation is a learning organisation, there is no need for (impact) evaluation so often – ongoing feedback and functioning communication with stakeholders suffice; only at certain moments (greater complexity, monitoring impact, etc.) does it make sense to do evaluation as such. Ideally, evaluation should never be an end in itself – if we do not have well-set-up ongoing monitoring and feedback, we are not used to asking the right questions and working with the answers; this takes time. Another problem is that European projects still put a lot of emphasis on controls and audits; we do not ask as much about impacts. The solution would be more trust between donor, beneficiary and evaluator – if someone has a certain vision of the world, they will always trust that vision more, even when they have objective data.
- MN: He refers to a post by Vláďa Kváča, who raises the question of whether it would be better to do without evaluation altogether or to conceive it in a completely different way (see the sources section below – author's note).
- JD: When we evaluate really complex systems, the evaluation does not answer everything we need; it takes a really long time for an external evaluator even to understand what the project is about and to identify what is not working and what it all affects. It needs to be supplemented with other tools.
- MD: He describes the situation at the Ministry – a department with 10 staff sets the evaluation plan for an operational programme with 10,000 projects and more than 600,000 people supported, which implies a different level of what can be monitored. However, here too there is a gradual shift – evaluation is being monitored more for innovative projects. The problem is that for systemic projects, where hundreds of millions are invested, the capacity to evaluate and recognise their long-term impact is low. The attempt is to strike a compromise – to evaluate what is a priority for the ministry and, at the same time, only where it makes sense (for some types of projects it will no longer be done, etc.). As for indicators, there is a lot of room for innovation here, but the Ministry has almost no influence on setting them. Concerning internal evaluations – for small organisations these are certainly preferable; however, the Czech Republic has a problem with the capacity and level of internal evaluators. There is great scope for involving the academic sector more, where advanced students could take part in basic and process evaluations.
- MN: He responds to the last point – he feels the need to train new evaluators within his university department and within the CES.
- BH: Evaluation depends a lot on how big the project is – if it is large, evaluation is appropriate. On the objectivity of internal evaluation: sometimes it is not even possible to do it otherwise (for financial reasons); secondly, it is a question of whether we do internal evaluation for ourselves or for someone else. If it is for ourselves, there is no reason not to be as honest as possible; if I want to present the results, the question is how, for example, a donor can work with admitted errors. If the purpose of the evaluation is just to say how everything went, that is not right. If it were possible to admit what went wrong without funding being cut, it would be better for everyone and would bring more interesting outputs. OSF also administers the Norway Grants, which puts it closer to government administration: it has been given indicators by its donors and there is little room for change. To balance this, OSF tries to focus on qualitative indicators and lead the supported organisations to learn from their mistakes. This need not be called evaluation – perhaps an assessment, a reflection… It certainly makes sense, not necessarily through questionnaires, but perhaps through a case study in which conclusions are clearly formulated and what comes out of it is identified. It is good for organisations to publish as open data so that they can learn from each other. It also matters how the whole project is set up and what the donors' capacity is in general – it is hard to build trust if there is no space for dialogue between beneficiaries and donors.
- MN: What, then, would be practical tips for effective communication between donors and beneficiaries? A guide that redefines how to work with evaluation can help reframe it from evaluation to learning (see resources below – author's note).
- MD: Is evaluation supposed to produce only positive results? Certainly not, because results should also serve to set up future calls; projects should also include process evaluations, which allow the project to be adjusted along the way.
- MN: He responds that the situation has changed in this respect (at least for social innovation projects at the MLSA); currently there is much more willingness to adjust the project flexibly as it progresses rather than waiting to see how it turns out.
- Suggestion from the audience (Ministry of Education): evaluation is about a certain mindset; in any evaluation, the tools need to be chosen with respect to who the evaluation is addressed to. It is important to think about simplicity and to look for common ground among those it is addressed to. The problem is that trust in the state system is broken in our country, yet even suggestions from the Ministry can be a regular part of the learning process. Evaluation should be a kind of "demilitarised zone" where everyone gets together and talks openly about what works and what doesn't. This cannot be done in a universal way, but according to individual needs (what one needs to find out).
- ZS: The fear may not only be of the donor but also of the public – admitting mistakes to critics can be dangerous and may genuinely threaten the continuation of a project. The governing body may be understanding, but a public distrustful of the non-profit sector may not be.
GROUP DISCUSSIONS
1/ Evaluation from the point of view of representatives of the PUBLIC SECTOR and the FOUNDATIONS
(Briefly, the following themes re-emerged: communication, discussion, cooperation, and trust based on contact between the donor, the beneficiary and the evaluator. The group discussed pre-developed evaluation design; accountability for project outcomes and the idea that these actors should cooperate as a team; and, finally, the quality of the evaluation assignment. Two sources of inspiration were also mentioned: reference groups and evaluation mentoring.)
- ZS: Evaluation design as part of project design can prove problematic if the project (and therefore the evaluation questions) changes over time. There may be a proposed design at the beginning, but after three years of implementation the evaluation may change a lot. The concern is that, for example in European projects, change is generally quite a problem. Question to NK on how to address this.
- NK: Positive experience with their managing authorities (Prague City Hall and the Norway Grants) – changing the evaluation design was not a problem; at the same time, the evaluation design should not be too detailed.
- MD: At the MLSA, for internal evaluation they do not recommend setting the evaluation design in too much detail; there should be more room for flexibility. For external evaluation – if it is tendered, it is harder to alter, as the subject of the contract needs to be fulfilled. In any case, the goal remains the same – to evaluate the intervention well – so one should certainly not stick with the same evaluation design if the project has changed.
- MN: He agrees that it is good to have the evaluation design in advance and he also has good experience with the fact that it was possible to make partial changes in it during communication with the managing authority.
- MD: The problem is that this would mean organisations paying an evaluator in advance (before submitting the project, to create the evaluation design) – but with their current capacities, organisations in the Czech Republic are far from ready for this (they wait for a subsidy and only then build a team on that basis – their staffing depends purely on funding).
- JD: She has experience with a project that is evolving; the evaluator does not need to be there at the very beginning, but it is advisable that they join as early as possible. It is good for the evaluator to communicate with the developer – they can bring in good questions that help set the project up well and build trust, which then helps the evaluation results to be acted upon. Who is actually responsible for the evaluation results? The evaluator and the client – the latter by being open in communicating with the evaluator. An internal evaluator has an advantage because they have great insight and can use resources that external evaluators never can. With an external evaluation, then, the recommendations obtained may be unfeasible because they do not respect the constraints within which the institution or organisation operates.
- There should be shared responsibility for the results of interventions between the donor and the project implementer.
- The representative of the Ministry of Education agrees: the beneficiary and the managing authority can influence quite a lot, but many things cannot be changed – certain control mechanisms cannot be reversed at the level of individual calls, having been in place for years across various operational periods. Regarding the bureaucratic burden, for example – to some extent there will always be administration; however, something can be done, so it is important to keep drawing attention to it.
- The representative of the CES, drawing on his experience from, among others, the Ministry of Foreign Affairs, confirms this: without a good call, it is difficult to create good projects. At the same time, the quality aspect is essential – we cannot get high quality for the lowest price. He does not agree that some things cannot be changed; there should always be a way to change them.
- JD: External evaluation – it is bad practice if the external evaluator sets up the evaluation at the beginning, disappears from the project, and comes back with an evaluation at the end; they should continuously ask what the implementer (beneficiary, grantee) is doing, how and why. Not being internal, the evaluator lacks insight, so even with a design in hand it is good to prepare what recommendations to give, what to highlight, etc. They will find out a lot of things but have to choose what is relevant to pass on. To avoid general platitudes while still summarising the most important points concisely, illustrated with concrete examples, the evaluator should have good insight both into the project itself and into who the evaluation results are addressed to and what language those people speak. On that basis, the evaluation results can then also be disseminated to more stakeholders.
- MN: Prefers small projects, where it is easier to apply the principles of a learning organisation; the problem here may be that the evaluation brief is simply copied from the past, with no deeper thinking behind it. If organisations want to learn, a well-defined contract is an important start; here it is very useful to draw on the methodology of the Czech Evaluation Society (see sources below – author's note).
- A representative of the CES agrees, stating that the formats of reference groups and evaluation mentoring have proven successful – the CES helps both with setting up the contract and then with communicating with whoever wins it.
- MD: The situation differs between specialised departments and organisations where one person has to take care of everything else as well. However, there is a shift – e.g., the tender documentation for the OPZZ is now much more specific and clearer than it used to be.
- Another representative of the CES adds to JD regarding the delivery of outputs – this discussion has been going on for a long time and is perhaps related to knowledge brokering; however, while clients often want it, reports cannot be punchy and short without becoming generic platitudes. Moreover, when there is a lot to analyse, it would be great to be able to hold different types of meetings and produce different outputs – but this takes time (and is therefore expensive).
2/ Evaluation from the point of view of the representatives of the FOUNDATIONS and the BENEFICIARIES
- To what extent and under what circumstances do donors trust evaluation results?
- It’s good to really communicate with the donor and establish a closer relationship with them, to build trust.
- For the Ministry of Education, this was difficult to achieve in the past because a large number of projects fell to a small number of staff, which led to overload – this should change in the future.
- What do we use the evaluation results for?
- The Ministry of Education gives an example of how evaluation is used – they found, for instance, that in DVPP courses (continuing professional development for teachers), teachers share their experiences with each other and prefer certain types of courses: practical ones with didactic demonstrations and tools, where the lecturer can advise on specific situations; summer-school formats; and those with the possibility of sharing their own experience.
- MN adds that there is a project underway at the PII (Prague Innovation Institute) to work systematically with impact-focused evaluations.
3/ Evaluation from the point of view of the representatives of the PUBLIC SECTOR AND BENEFICIARIES
- What do we use the evaluation results for?
- Good communication between donors and beneficiaries is essential.
- At the same time, one of OSF’s priorities is to disseminate evaluation results among its beneficiaries so that they can learn from each other.
- To what extent and under what circumstances do donors trust evaluation results?
- An example was given of a situation where the results of an external evaluation (a large counterfactual study) were not received positively by the political representation in one project, which led to a decrease in support for this type of project in the city.
- MN responds that he perceives it differently – as an example of good practice, because other cities have started to support these types of projects on the basis of this study.
Recommended links and resources (from the roundtable chat):
- www.youth-impact.eu
- https://medium.com/czecheval/tak-tedy-o-intern%C3%AD-evaluaci-4c9b8c63b5a8
- https://taylornewberry.ca/wp-content/uploads/2021/12/Learning-and-Evaluation-Plan-Workbook.pdf
- https://czecheval.cz/cs/Aktivity/Metodika-pro-zadavani-evaluaci
- https://czecheval.cz/cs/Skoleni-a-kurzy/Priprava-zadavaci-dokumentace
- More about evaluations in EU funds: https://www.dotaceeu.cz/cs/evropske-fondy-v-cr/narodni-organ-pro-koordinaci/evaluace
Invitations:
- Youth Impact:
- The last series of Youth Impact workshops https://www.youth-impact.eu/2022/03/14/jak-na-evaluaci-ktera-podporuje-podnikavost-a-zamestnanost-mladych-lidi/
- The last two Youth Impact roundtables (invitations will be published on the project's Facebook page and website)
- Final Youth Impact Conference (the invitation will be published on the project's Facebook page and website)
- CES:
- One-day course on Commissioning Evaluations – can be repeated if there is enough interest. The two-day Evaluation Minimum course will take place in late March or early April.
- The annual conference of the Czech Evaluation Society will take place on 15–16 June 2022.
This report is available in Czech here: